Publisher: Tordotcom
Copyright: 2021
ISBN: 1-250-30112-2
Format: Kindle
Pages: 151
I'm awfully lucky when you think about it. Garbagetown is the most wonderful place anybody has ever lived in the history of the world, even if you count the Pyramids and New York City and Camelot. I have Grape Crush and Big Bargains and my hibiscus flower, and I can fish like I've got bait for a heart so I hardly ever go hungry, and once I found a ruby ring and a New Mexico license plate inside a bluefin tuna. Everyone says they only hate me because I annihilated hope and butchered our future, but I know better, and anyway, it's a lie. Some people are just born to be despised. The Loathing of Tetley began small and grew bigger and bigger, like the Thames, until it swallowed me whole.

Garbagetown is a giant floating pile of garbage in the middle of the ocean, and it is, so far as anyone knows, the only "land" left in the world. Global warming has flooded everything until the remaining Fuckwits (as their future descendants call everyone who was alive at the time) took to the Misery Boats and searched hopelessly for land. Eventually they realized they could live on top of the now-massive Pacific Garbage Patch and began the Great Sorting, which is fifty years into Tetley's past. All of the types of garbage were moved into their own areas, allowing small micronations of scavengers to form and giving each area its own character.
Candle Hole is the most beautiful place in Garbagetown, which is the most beautiful place in the world. All of the stubs of candles the Fuckwits threw out piled up into hills and mountains and caverns and dells, votive candles and taper candles and tea lights and birthday candles and big fat colorful pillar candles, stacked and somewhat melted into a great crumbling gorgeous warren of wicks and wax. All the houses are cozy little honeycombs melted into hillside, with smooth round windows and low golden ceilings. At night, from far away, Candle Hole looks like a firefly palace. When the wind blows, it smells like cinnamon, and freesia, and cranberries, and lavender, and Fresh Linen Scent, and New Car Smell.

Two things should be obvious from this introduction. First, do not read this book looking for an accurate, technical projection of our environmental future. Or, for that matter, physical realism of any kind. That's not the book that Valente is writing and you'll just frustrate yourself. This is science fiction as concretized metaphor rather than prediction or scientific exploration. We Fuckwits have drowned the world with our greed and left our descendants living in piles of our garbage; you have to suspend disbelief and go with the premise. The second thing is that either you will like Tetley's storytelling style or you will not like this book. I find Valente very hit-and-miss, but this time it worked for me. The language is a bit less over-the-top than Space Opera, and it fits Tetley's insistent, aggressive optimism so well that it carries much of the weight of characterization. Mileage will definitely vary; this is probably a love-it-or-hate-it book.

The Past is Red is divided into two parts. The first part is the short story "The Future is Blue," previously published in Clarkesworld and in Valente's short story collection of the same name. It tells the story of Tetley's early life, how she got her name, and how she became the most hated person in Garbagetown. The second part is much longer and features an older, quieter, more thoughtful, and somewhat more cynical Tetley, more life philosophy, and a bit of more-traditional puzzle science fiction. It lacks some of the bubbly energy of "The Future is Blue" but also features less violence and abuse. The overall work is a long novella or very short novel.

This book has a lot of feelings about the environment, capitalism, greed, and the desire to let other people solve your problems for you, and it is not subtle about any of them. It's satisfying in the way that a good rant is satisfying, not in the way that a coherent political strategy is satisfying. What saves it from being too didactic or self-righteous is Tetley, who is happy to record her own emotions and her moments of wonder and is mostly uninterested in telling other people what to do. The setting sounds darkly depressing, and there are moments where it feels that way in the book, but the core of the story and of Tetley's life philosophy is a type of personal resilience: find the things that make you happy, put one foot in front of the other, and enjoy the world for what it is rather than what it could be or what other people want to convince you it might be. It's also surprisingly funny, particularly if you see the humor in bizarrely-specific piles of the detritus of civilization.

The one place where I will argue with Valente a bit is that The Past is Red thoroughly embraces an environmental philosophy of personal responsibility.
The devastating critique aimed at the Fuckwits is universal and undifferentiated, except slightly by class. Tetley and the other inhabitants of Garbagetown make no distinction between types of Fuckwits or attempt to apportion blame in any way more granular than entire generations and eras. This is probably realistic. I understand why, by Tetley's time, no one is interested in the fine points of history. But the story was written today, for readers in our time, and this type of responsibility collapse is intentionally and carefully constructed by the largest polluters and the people with the most power. Collective and undifferentiated responsibility means that we're using up our energy fretting about whether we took two showers, which partly deflects attention from the companies, industries, and individuals that are directly responsible for the vast majority of environmental damage. We don't live in a world full of fuckwits; we live in a world run by fuckwits and full of the demoralized, harried, conned, manipulated, overwhelmed, and apathetic, which is a small but important difference. This book is not the right venue to explore that difference, but I wish the vitriol had not been applied quite so indiscriminately.

The rest, though, worked for me. Valente tends to describe things by piling clauses on top of adjectives, which objectively isn't the best writing, but it fits everything about Tetley's personality so well that I think this is the book where it works. I found her strange mix of optimism, practicality, and unbreakable inner ethics oddly endearing. "The Future is Blue" is available for free on-line, so if in doubt, read some or all of it, and that should tell you whether you're interested in the expansion. I'm glad I picked it up.

Content warning for physical and sexual abuse of the first-person protagonist, mostly in the first section.

Rating: 7 out of 10
launchpadlib, which were ported years ago). As such, we weren't trying to do this with the internet having Strong Opinions at us. We were doing this because it was obviously the only long-term-maintainable path forward, and in more recent times because some of our library dependencies were starting to drop support for Python 2 and so it was obviously going to become a practical problem for us sooner or later; but if we'd just stayed on Python 2 forever then fundamentally hardly anyone else would really have cared directly, only maybe about some indirect consequences of that. I don't follow Mercurial development so I may be entirely off-base, but if other people were yelling at me about how late my project was to finish its port, that in itself would make me feel more negatively about the project even if I thought it was a good idea. Having most of the pressure come from ourselves rather than from outside meant that wasn't an issue for us.
I'm somewhat inclined to think of the process as an extreme version of paying down technical debt. Moving from Python 2.7 to 3.5, as we just did, means skipping over multiple language versions in one go, and if similar changes had been made more gradually it would probably have felt a lot more like the typical dependency update treadmill. I appreciate why not everyone might want to think of it this way: maybe this is just my own rationalization.
Reflections on porting to Python 3
I'm not going to defend the Python 3 migration process; it was pretty rough in a lot of ways. Nor am I going to spend much effort relitigating it here, as it's already been done to death elsewhere, and as I understand it the core Python developers have got the message loud and clear by now. At a bare minimum, a lot of valuable time was lost early in Python 3's lifetime hanging on to flag-day-type porting strategies that were impractical for large projects, when it should have been providing for "bilingual" strategies (code that runs in both Python 2 and 3 for a transitional period), which is where most libraries and most large migrations ended up in practice. For instance, the early advice to library maintainers to maintain two parallel versions or perhaps translate dynamically with 2to3 was entirely impractical in most non-trivial cases and wasn't what most people ended up doing, and yet the idea that 2to3 is all you need still floats around Stack Overflow and the like as a result. (These days, I would probably point people towards something more like Eevee's porting FAQ as somewhere to start.)
There are various fairly straightforward things that people often suggest could have been done to smooth the path, and I largely agree: not removing the u'' string prefix only to put it back in 3.3, fewer gratuitous compatibility breaks in the name of tidiness, and so on. But if I had a time machine, the number one thing I would ask to have been done differently would be introducing type annotations in Python 2 before Python 3 branched off. It's true that it's technically possible to do type annotations in Python 2, but the fact that it's a different syntax that would have to be fixed later is offputting, and in practice it wasn't widely used in Python 2 code. To make a significant difference to the ease of porting, annotations would need to have been introduced early enough that lots of Python 2 library code used them, so that porting code didn't have to be quite so much of an exercise of manually figuring out the exact nature of string types from context.
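For illustration, the Python 2-compatible syntax is the comment-based annotation form from PEP 484; the function here is hypothetical, not Launchpad code:

from typing import Text

def render_greeting(name, payload):
    # type: (Text, bytes) -> Text
    # Comment-style annotations parse on Python 2, but the inline form
    # "def render_greeting(name: str, payload: bytes) -> str:" is Python 3
    # only, so every one of these comments has to be rewritten eventually.
    return name + payload.decode("utf-8")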
Launchpad is a complex piece of software that interacts with multiple domains: for example, it deals with a database, HTTP, web page rendering, Debian-format archive publishing, and multiple revision control systems, and there's often overlap between domains. Each of these tends to imply different kinds of string handling. Web page rendering is normally done mainly in Unicode, converting to bytes as late as possible; revision control systems normally want to spend most of their time working with bytes, although the exact details vary; HTTP is of course bytes on the wire, but Python's WSGI interface has some string type subtleties. In practice I found myself thinking about at least four string-like types (that is, things that in a language with a stricter type system I might well want to define as distinct types and restrict conversion between them): bytes, text, ordinary native strings (str in either language, encoded to UTF-8 in Python 2), and native strings with WSGI's encoding rules. Some of these are emergent properties of writing in the intersection of Python 2 and 3, which is effectively a specialized language of its own, without coherent official documentation, whose users must intuit its behaviour by comparing multiple sources of information or by referring to unofficial porting guides: not a very satisfactory situation. Fortunately much of the complexity collapses once it becomes possible to write solely in Python 3.
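To make those four types concrete, here is a minimal sketch with made-up sample data, using six's helpers rather than anything Launchpad-specific:

import six

data = b"caf\xc3\xa9"            # bytes: wire and database payloads
text = data.decode("utf-8")      # text: unicode on Python 2, str on Python 3
native = six.ensure_str(text)    # native str: UTF-8 bytes on 2, text on 3
# WSGI native strings (PEP 3333): on Python 3 the server hands you header
# bytes decoded as ISO-8859-1, which may need re-encoding to recover UTF-8.
wsgi = data.decode("iso-8859-1") if six.PY3 else data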
Some of the difficulties we ran into are not ones that are typically thought of as Python 2-to-3 porting issues, because they were changed later in Python 3's development process. For instance, the email module was substantially improved in around the 3.2/3.3 timeframe to handle Python 3's bytes/text model more correctly, and since Launchpad sends quite a few different kinds of email messages and has some quite picky tests for exactly what it emits, this entailed a lot of work in our email sending code and in our test suite to account for that. (It took me a while to work out whether we should be treating raw email messages as bytes or as text; bytes turned out to work best.) 3.4 made some tweaks to the implementation of quoted-printable encoding that broke a number of our tests in ways that took some effort to fix, because the tests needed to work on both 2.7 and 3.5. The list goes on. I got quite proficient at digging through Python's git history to figure out when and why some particular bit of behaviour had changed.
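On Python 3 the natural way to follow that bytes-first approach is the bytes-based parsing API that arrived in 3.2 (a toy message, not Launchpad's code):

import email

raw = b"Subject: hello\r\nFrom: noreply@example.com\r\n\r\nbody\r\n"
msg = email.message_from_bytes(raw)   # bytes in...
print(msg["Subject"])                 # ...text headers out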
One of the thorniest problems was parsing HTTP form data. We mainly rely on zope.publisher for this, which in turn relied on cgi.FieldStorage; but cgi.FieldStorage is badly broken in some situations on Python 3. Even if that bug were fixed in a more recent version of Python, we can't easily use anything newer than 3.5 for the first stage of our port due to the version of the base OS we're currently running, so it wouldn't help much. In the end I fixed some minor issues in the multipart module (and was kindly given co-maintenance of it) and converted zope.publisher to use it. Although this took a while to sort out, it seems to have gone very well.
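As a rough sketch of the resulting approach (the multipart package's parse_form_data helper in a toy WSGI application, not zope.publisher's actual integration):

from multipart import parse_form_data

def application(environ, start_response):
    # parse_form_data reads multipart/form-data or urlencoded bodies from
    # the WSGI environ and returns (form fields, file uploads).
    forms, files = parse_form_data(environ)
    body = ("hello %s" % forms.get("name", "world")).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [body]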
A couple of other interesting late-arriving issues were around pickle. For most things we normally prefer safer formats such as JSON, but there are a few cases where we use pickle, particularly for our session databases. One of my colleagues pointed out that I needed to remember to tell pickle to stick to protocol 2, so that we'd be able to switch back and forward between Python 2 and 3 for a while; quite right, and we later ran into a similar problem with marshal too. A more surprising problem was that datetime.datetime objects pickled on Python 2 require special care when unpickling on Python 3; rather than the approach that ended up being implemented and documented for Python 3.6, though, I preferred a custom unpickler, both so that things would work on Python 3.5 and so that I wouldn't have to risk affecting the decoding of other pickled strings in the session database.
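A sketch of the two halves of that, with hypothetical session data; the encoding='latin-1' route is the one documented for Python 3.6, shown here for contrast with the custom-unpickler approach described above:

import pickle

# Write with protocol 2, the highest version Python 2 can read, so that
# appservers on either Python version can load each other's sessions:
blob = pickle.dumps({"logged_in": True}, protocol=2)

# When loading a pickle that Python 2 wrote and that contains datetime
# objects, the documented route decodes *every* pickled str as latin-1,
# which is exactly the side effect a targeted custom unpickler avoids:
data = pickle.loads(blob, encoding="latin-1")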
General lessons

Writing this over a year after Python 2's end-of-life date, and certainly nowhere near the leading edge of Python 3 porting work, it's perhaps more useful to look at this in terms of the lessons it has for other large technical debt projects.

I mentioned in my previous article that I used the approach of an enormous and frequently-rebased git branch as a working area for the port, committing often and sometimes combining and extracting commits for review once they seemed to be ready. A port of this scale would have been entirely intractable without a tool of similar power to git rebase, so I'm very glad that we finished migrating to git in 2019. I relied on this right up to the end of the port, and it also allowed for quick assessments of how much more there was to land. git worktree was also helpful, in that I could easily maintain working trees built for each of Python 2 and 3 for comparison.
As is usual for most multi-developer projects, all changes to Launchpad need to go through code review, although we sometimes make exceptions for very simple and obvious changes that can be self-reviewed. Since I knew from the outset that this was going to generate a lot of changes for review, I structured my work to make it as easy as possible for my colleagues to review. This generally involved keeping most changes to a somewhat manageable size of 800 lines or less (although this wasn't always possible), and arranging commits mainly according to the kind of change they made rather than their location. For example, when I needed to fix issues with / in Python 3 being true division rather than floor division, I did so in one commit across the various places where it mattered and took care not to mix it with other unrelated changes. This is good practice for nearly any kind of development, but it was especially important here, since it allowed reviewers to consider a clear explanation of what I was doing in the commit message and then skim-read the rest of it much more quickly.
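The division change itself is small and mechanical, which is what makes batching it by kind reviewable:

# Python 2: 7 / 2 == 3 (ints floor-divide); Python 3: 7 / 2 == 3.5.
# The explicit floor-division operator behaves the same on both:
print(7 / 2)    # 3 on Python 2, 3.5 on Python 3
print(7 // 2)   # 3 on both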
It was vital to keep the codebase in a working state at all times, and deploy to production reasonably often: this way, if something went wrong, the amount of code we had to debug to figure out what had happened was always tractable. (Although I can't seem to find it now to link to it, I saw an account a while back of a company that had taken a flag-day approach instead with a large codebase. It seemed to work for them, but I'm certain we couldn't have made it work for Launchpad.)
I can't speak too highly of Launchpad's test suite, much of which originated before my time. Without extensive coverage of all sorts of interesting edge cases at both the unit and functional level, and a corresponding culture of maintaining that test suite well when making new changes, it would have been impossible to be anything like as confident of the port as we were.
As part of the porting work, we split out a couple of substantial chunks of the Launchpad codebase that could easily be decoupled from the core: its Mailman integration and its code import worker. Both of these had substantial dependencies with complex requirements for porting to Python 3, and arranging to be able to do these separately on their own schedule was absolutely worth it. Like disentangling balls of wool, any opportunity you can take to make things less tightly-coupled is probably going to make it easier to disentangle the rest. (I can see a tractable way forward to porting the code import worker, so we may well get that done soon. Our Mailman integration will need to be rewritten, though, since it currently depends on the Python-2-only Mailman 2, and Mailman 3 has a different architecture.)
Python lessons

Our database layer was already in pretty good shape for a port, since at least the modern bits of its table modelling interface were already strict about using Unicode for text columns. If you have any kind of pervasive low-level framework like this, then making it be pedantic at you in advance of a Python 3 port will probably incur much less swearing in the long run, as you won't be trying to deal with quite so many bytes/text issues at the same time as everything else.
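This is the kind of strictness meant; a sketch in the style of Storm properties (a hypothetical table, and the details of Launchpad's real models will differ):

from storm.locals import Int, Storm, Unicode

class Person(Storm):
    __storm_table__ = "person"
    id = Int(primary=True)
    display_name = Unicode()  # assigning bytes here fails immediately,
                              # long before anything reaches the database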
Early in our port, we established a standard set of __future__ imports and started incrementally converting files over to them, mainly because we weren't yet sure what else to do and it seemed likely to be helpful. absolute_import was definitely reasonable (and not often a problem in our code), and print_function was annoying but necessary. In hindsight I'm not sure about unicode_literals, though. For files that only deal with bytes and text it was reasonable enough, but as I mentioned above there were also a number of cases where we needed literals of the language's native str type, i.e. bytes in Python 2 and text in Python 3: this was particularly noticeable in WSGI contexts, but also cropped up in some other surprising places. We generally either omitted unicode_literals or used six.ensure_str in such cases, but it was definitely a bit awkward and maybe I should have listened more to people telling me it might be a bad idea.
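Concretely, the pattern looked something like this (a made-up example of the WSGI case):

from __future__ import absolute_import, print_function, unicode_literals

import six

# With unicode_literals active, "HTTP_HOST" would be text even on
# Python 2, but WSGI environ keys must be native str on both versions:
def get_host(environ):
    return environ[six.ensure_str("HTTP_HOST")]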
A lot of Launchpad's early tests used doctest, mainly in the style where you have text files that interleave narrative commentary with examples. The development team later reached consensus that this was best avoided in most cases, but by then there were far too many doctests to conveniently rewrite in some other form. Porting doctests to Python 3 is really annoying. You run into all the little changes in how objects are represented as text (particularly u'...' versus '...', but plenty of other cases as well); you have next to no tools to do anything useful like skipping individual bits of a doctest that don't apply; using __future__ imports requires the rather obscure approach of adding the relevant names to the doctest's globals in the relevant DocFileSuite or DocTestSuite; dealing with many exception tracebacks requires something like zope.testing.renormalizing; and whatever code refactoring tools you're using probably don't work properly. Basically, don't have done that. It did all turn out to be tractable for us in the end, and I managed to avoid using much in the way of fragile doctest extensions aside from the aforementioned zope.testing.renormalizing, but it was not an enjoyable experience.
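The globals trick works because doctest scans its globs for __future__ feature objects when choosing compiler flags; it looks something like this (hypothetical doctest file name):

from __future__ import print_function, unicode_literals

from doctest import DocFileSuite

# doctest compiles examples with the flags of any __future__ features it
# finds among the globs, so this enables both features inside the file.
def test_suite():
    return DocFileSuite(
        "storage.txt",
        globs={"print_function": print_function,
               "unicode_literals": unicode_literals})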
Regressions

I know of nine regressions that reached Launchpad's production systems as a result of this porting work; of course there were various other regressions caught by CI or in manual testing. (Considering the size of this project, I count it as a resounding success that there were only nine production issues, and that for the most part we were able to fix them quickly.)
Equality testing of removed database objects

One of the things we had to do while porting to Python 3 was to implement the __eq__, __ne__, and __hash__ special methods for all our database objects. This was quite conceptually fiddly, because doing this requires knowing each object's primary key, and that may not yet be available if we've created an object in Python but not yet flushed the actual INSERT statement to the database (most of our primary keys are auto-incrementing sequences). We thus had to take care to flush pending SQL statements in such cases in order to ensure that we know the primary keys.

However, it's possible to have a problem at the other end of the object lifecycle: that is, a Python object might still be reachable in memory even though the underlying row has been DELETEd from the database. In most cases we don't keep removed objects around for obvious reasons, but it can happen in caching code, and buildd-manager crashed as a result (in fact while it was still running on Python 2). We had to take extra care to avoid this problem.
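In outline, the comparison methods look something like this: a simplified sketch, assuming a store.flush() that assigns pending auto-increment ids, not Launchpad's actual implementation:

class DatabaseObject(object):
    def __eq__(self, other):
        if type(self) is not type(other):
            return NotImplemented
        if self.id is None or other.id is None:
            store.flush()  # force pending INSERTs so primary keys exist
        return self.id == other.id

    def __ne__(self, other):  # Python 2 does not derive this from __eq__
        result = self.__eq__(other)
        return result if result is NotImplemented else not result

    def __hash__(self):
        return hash((type(self), self.id))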
Debian imports crashed on non-UTF-8 filenames

Python 2 has some unfortunate behaviour around passing bytes or Unicode strings (depending on the platform) to shutil.rmtree, and the combination of some porting work and a particular source package in Debian that contained a non-UTF-8 file name caused us to run into this. The fix was to ensure that the argument passed to shutil.rmtree is a str regardless of Python version.
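The shape of the fix, as a sketch using six rather than the exact Launchpad change:

import shutil

import six

def remove_tree(path):
    # On Python 2, passing unicode makes os.listdir() return unicode
    # names, which blows up on non-UTF-8 entries; a native str avoids the
    # decode entirely, and on Python 3 text is already the right type.
    shutil.rmtree(six.ensure_str(path))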
We'd actually run into something similar before: it's a subtle porting gotcha, since it's quite easy to end up passing Unicode strings to shutil.rmtree if you're in the process of porting your code to Python 3, and you might easily not notice if the file names in your tests are all encoded using UTF-8.
lazr.restful ETags

We eventually got far enough along that we could switch one of our four appserver machines (we have quite a number of other machines too, but the appservers handle web and API requests) to Python 3 and see what happened. By this point our extensive test suite had shaken out the vast majority of the things that could go wrong, but there was always going to be room for some interesting edge cases.

A member of the Ubuntu kernel team reported that they were seeing an increase in 412 Precondition Failed errors in some of their scripts that use our webservice API. These can happen when you're trying to modify an existing resource: the underlying protocol involves sending an If-Match header with the ETag that the client thinks the resource has, and if this doesn't match the ETag that the server calculates for the resource then the client has to refresh its copy of the resource and try again. We initially thought that this might be legitimate, since it can happen in normal operation if you collide with another client making changes to the same resource, but it soon became clear that something stranger was going on: we were getting inconsistent ETags for the same object even when it was unchanged. Since we'd recently switched a quarter of our appservers to Python 3, that was a natural suspect.
Our lazr.restful package provides the framework for our webservice API, and roughly speaking it generates ETags by serializing objects into some kind of canonical form and hashing the result. Unfortunately the serialization was dependent on the Python version in a few ways, and in particular it serialized lists of strings, such as lists of bug tags, differently: Python 2 used [u'foo', u'bar', u'baz'] where Python 3 used ['foo', 'bar', 'baz']. In lazr.restful 1.0.3 we switched to using JSON for this, removing the Python version dependency and ensuring consistent behaviour between appservers.
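The idea is that a canonical JSON serialization is byte-identical across Python versions, so hashing it yields stable ETags; an illustrative sketch, not lazr.restful's actual code:

import hashlib
import json

def make_etag(fields):
    # sort_keys and fixed separators give identical output on Python 2
    # and 3, unlike repr() of lists of (unicode) strings.
    canonical = json.dumps(fields, sort_keys=True, separators=(",", ":"))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

print(make_etag({"tags": ["foo", "bar", "baz"]}))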
Memory leaks

This problem took the longest to solve. We noticed fairly quickly from our graphs that the appserver machine we'd switched to Python 3 had a serious memory leak. Our appservers had always been a bit leaky, but now it wasn't so much "a small hole that we can bail occasionally" as "the boat is sinking rapidly". (Yes, this got in the way of working out what was going on with ETags for a while.)
I spent ages messing around with various attempts to fix this. Since only a quarter of our appservers were affected, and we could get by on 75% capacity for a while, it wasn't urgent, but it was definitely annoying. After spending some quality time with objgraph, for some time I thought traceback reference cycles might be at fault, and I sent a number of fixes to various upstream projects for those (e.g. zope.pagetemplate). Those didn't help the leaks much though, and after a while it became clear to me that this couldn't be the sole problem: Python has a cyclic garbage collector that will eventually collect reference cycles as long as there are no strong references to any objects in them, although it might not happen very quickly. Something else must be going on.
Debugging reference leaks in any non-trivial and long-running Python program is extremely arduous, especially with ORMs that naturally tend to end up with lots of cycles and caches. After a while I formed a hypothesis that zope.server might be keeping a strong reference to something, although I never managed to nail it down more firmly than that. This was an attractive theory, as we were already in the process of migrating to Gunicorn for other reasons anyway, and Gunicorn also has a convenient max_requests setting that's good at mitigating memory leaks. Getting this all in place took some time, but once we did we found that everything was much more stable.

This isn't completely satisfying, as we never quite got to the bottom of the leak itself, and it's entirely possible that we've only papered over it using max_requests: I expect we'll gradually back off on how frequently we restart workers over time to try to track this down. However, pragmatically, it's no longer an operational concern.
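For reference, the mitigation is just a couple of settings in a Gunicorn configuration file (the values here are invented, not Launchpad's):

# gunicorn.conf.py: recycle each worker after a bounded number of
# requests so any slow leak is reclaimed; the jitter staggers restarts
# so workers don't all recycle at once.
max_requests = 1000
max_requests_jitter = 100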
Mirror prober HTTPS proxy handling

After we switched our script servers to Python 3, we had several reports of mirror probing failures. (Launchpad keeps lists of Ubuntu archive and image mirrors, and probes them every so often to check that they're reasonably complete and up to date.) This only affected HTTPS mirrors when probed via a proxy server, support for which is a relatively recent feature in Launchpad and involved some code that we never managed to unit-test properly: of course this is exactly the code that went wrong. Sadly I wasn't able to sort out that gap, but at least the fix was simple.
Non-MIME-encoded email headers

As I mentioned above, there were substantial changes in the email package between Python 2 and 3, and indeed between minor versions of Python 3. Our test coverage here is pretty good, but it's an area where it's very easy to have gaps. We noticed that a script that processes incoming email was crashing on messages with headers that were non-ASCII but not MIME-encoded (and indeed then crashing again when it tried to send a notification of the crash!). The only examples of these I looked at were spam, but we still didn't want to crash on them.

The fix involved being somewhat more careful about both the handling of headers returned by Python's email parser and the building of outgoing email notifications. This seems to be working well so far, although I wouldn't be surprised to find the odd other incorrect detail in this sort of area.
Failure to handle non-ISO-8859-1 URL-encoded form input

Remember how I said that parsing HTTP form data was thorny? After we finished upgrading all our appservers to Python 3, people started reporting that they couldn't post Unicode comments to bugs, which turned out to happen only if the attempt was made using JavaScript, and was because I hadn't quite managed to get URL-encoded form data working properly with zope.publisher and multipart. The current standard describes the URL-encoded format for form data as "in many ways an aberrant monstrosity", so this was no great surprise.

Part of the problem was some very strange choices in zope.publisher dating back to 2004 or earlier, which I attempted to clean up and simplify. The rest was that Python 2's urlparse.parse_qs unconditionally decodes percent-encoded sequences as ISO-8859-1 if they're passed in as part of a Unicode string, so multipart needs to work around this on Python 2. I'm still not completely confident that this is correct in all situations, but at least now that we're on Python 3 everywhere the matrix of cases we need to care about is smaller.
Inconsistent marshalling of Loggerhead's disk cache

We use Loggerhead for providing web browsing of Bazaar branches. When we upgraded one of its two servers to Python 3, we immediately noticed that the one still on Python 2 was failing to read back its revision information cache, which it stores in a database on disk. (We noticed this because it caused a deployment to fail: when we tried to roll out new code to the instance still on Python 2, Nagios checks had already caused an incompatible cache to be written for one branch from the Python 3 instance.)

This turned out to be a similar problem to the pickle issue mentioned above, except this one was with marshal, which I didn't think to look for because it's a relatively obscure module mostly used for internal purposes by Python itself; I'm not sure that Loggerhead should really be using it in the first place. The fix was relatively straightforward, complicated mainly by now needing to cope with throwing away unreadable cache data.

Ironically, if we'd just gone ahead and taken the nominally riskier path of upgrading both servers at the same time, we might never have had a problem here.
Intermittent bzr failures

Finally, after we upgraded one of our two Bazaar codehosting servers to Python 3, we had a report of intermittent bzr branch hangs. After some digging I found this in our logs:
Traceback (most recent call last):
...
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/conch/ssh/channel.py", line 136, in addWindowBytes
self.startWriting()
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/lazr/sshserver/session.py", line 88, in startWriting
resumeProducing()
File "/srv/bazaar.launchpad.net/production/codehosting1-rev-20124175fa98fcb4b43973265a1561174418f4bd/env/lib/python3.5/site-packages/twisted/internet/process.py", line 894, in resumeProducing
for p in self.pipes.itervalues():
builtins.AttributeError: 'dict' object has no attribute 'itervalues'
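This is the dict iterator idiom that Python 3 removed; the portable spelling is .values(), which returns a list on Python 2 and a view on Python 3, either of which the loop can iterate:

# instead of: for p in self.pipes.itervalues():
for p in self.pipes.values():
    ...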
It is with great pleasure that we can announce a new release of nikita. Version 0.6 (https://gitlab.com/OsloMet-ABI/nikita-noark5-core). This release makes new record keeping functionality available. This really is a maturity release, both in terms of functionality and code: considerable effort has gone into refactoring the codebase and simplifying the code.

If a free and open standardized archiving API sounds interesting to you, please contact us on IRC (#nikita on irc.oftc.net) or email (nikita-noark mailing list). As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

We are currently in the process of reaching an agreement with an archive institution to publish their picture archive using nikita with business specific metadata, and we hope that we can share this with you soon. This is an interesting project, as it allows the organisation to bring an older picture archive back to life while using the original metadata values stored as business specific metadata. Combined with OData, the scope and use of the archive is significantly increased, and it will showcase both the flexibility and power of Noark. I really think we are approaching a version 1.0 of nikita, even though there is still a lot of work to be done. The notable work at the moment is to implement access control and full-text indexing of documents.

My sincere thanks to everyone who has contributed to this release! - Thomas

Release 0.6 2021-06-10 (d1ba5fc7e8bad0cfdce45ac20354b19d10ebbc7b)

Notable changes for this release include:
- Significantly improved OData parsing
- Support for business specific metadata and national identifiers
- Continued implementation of domain model and endpoints
- Improved testing
- Ability to export and import from arkivstruktur.xml
- Refactor metadata entity search
- Remove redundant security configuration
- Make OpenAPI documentation work
- Change database structure / inheritance model to a more sensible approach
- Make it possible to move entities around the fonds structure
- Implemented a number of missing endpoints
- Make sure yml files are in sync
- Implemented/finalised storing and use of
- Business Specific Metadata
- Norwegian National Identifiers
- Cross Reference
- Keyword
- StorageLocation
- Author
- Screening for relevant objects
- ChangeLog
- EventLog
- Make generation of updated docker image part of successful CI pipeline
- Implement pagination for all list requests
- Refactor code to support lists
- Refactor code for readability
- Standardise the controller/service code
- Finalise File->CaseFile expansion and Record->registryEntry/recordNote expansion
- Improved Continuous Integration (CI) approach via gitlab
- Changed conversion approach to generate tagged PDF documents
- Updated dependencies
- For security reasons
- Brought codebase to spring-boot version 2.5.0
- Remove import of unnecessary dependencies
- Remove non-used metrics classes
- Added new analysis to CI including
- Implemented storing of Keyword
- Implemented storing of Screening and ScreeningMetadata
- Improved OData support
- Better support for inheritance in queries where applicable
- Brought in more OData tests
- Improved OData/hibernate understanding of queries
- Implement $count, $orderby
- Finalise $top and $skip
- Make sure & is used between query parameters
- Improved Testing in codebase
- A new approach for integration tests to make test more readable
- Introduce tests in parallel with code development for TDD approach
- Remove test that required particular access to storage
- Implement case-handling process from received email to case-handler
- Develop required GUI elements (digital postroom from email)
- Introduced leader, quality control and postroom roles
- Make PUT requests return 200 OK not 201 CREATED
- Make DELETE requests return 204 NO CONTENT not 200 OK
- Replaced 'oppdatert*' with 'endret*' everywhere to match latest spec
- Upgrade Gitlab CI to use python > 3 for CI scripts
- Bug fixes
- Fix missing ALLOW
- Fix reading of objects from jar file during start-up
- Reduce the number of warnings in the codebase
- Fix delete problems
- Make better use of cascade for "leaf" objects
- Add missing annotations where relevant
- Remove the use of ETAG for delete
- Fix missing/wrong/broken rels discovered by runtest
- Drop unofficial convertFil (konverterFil) end point
- Fix regex problem for dateTime
- Fix multiple static analysis issues discovered by coverity
- Fix proxy problem when looking for object class names
- Add many missing translated Norwegian to English (internal) attribute/entity names
- Change UUID generation approach to allow code also set a value
- Fix problem with Part/PartPerson
- Fix problem with empty OData search results
- Fix metadata entity domain problem
- General Improvements
- Makes future refactoring easier as coupling is reduced
- Allow some constant variables to be set from property file
- Refactor code to make reflection work better across codebase
- Reduce the number of @Service layer classes used in @Controller classes
- Be more consistent on naming of similar variable types
- Start printing rels/href if they are applicable
- Cleaner / standardised approach to deleting objects
- Avoid concatenation when using StringBuilder
- Consolidate code to avoid duplication
- Tidy formatting for a more consistent reading style across similar class files
- Make throw a log.error message not an log.info message
- Make throw print the log value rather than printing in multiple places
- Add some missing pronom codes
- Fix time formatting issue in Gitlab CI
- Remove stale / unused code
- Use only UUID datatype rather than combination String/UUID for systemID
- Mark variables final and @NotNull where relevant to indicate intention
- Change Date values to DateTime to maintain compliance with Noark 5 standard
- Domain model improvements using Hypersistence Optimizer
- Move @Transactional from class to methods to avoid borrowing the JDBC Connection unnecessarily
- Fix OneToOne performance issues
- Fix ManyToMany performance issues
- Add missing bidirectional synchronization support
- Fix ManyToMany performance issue
- Make List and Set use the final keyword to avoid potential problems during update operations
- Changed internal URLs, replaced "hateoas-api" with "api".
- Implemented storing of Precedence.
- Corrected handling of screening.
- Corrected _links collection returned for list of mixed entity types to match the specific entity.
- Improved several internal structures.
Series: Lady Astronaut #3
Publisher: Tor
Copyright: 2020
ISBN: 1-250-23648-7
Format: Kindle
Pages: 542
I wrote this blog post with Kaylea Champion and a version of this post was originally posted on the Community Data Science Collective blog.

Critical software we all rely on can silently crumble away beneath us. Unfortunately, we often don't find out software infrastructure is in poor condition until it is too late. Over the last year or so, I have been supporting Kaylea Champion on a project my group announced earlier to measure software underproduction, a term we use to describe software that is low in quality but high in importance. Underproduction reflects an important type of risk in widely used free/libre open source software (FLOSS) because participants often choose their own projects and tasks. Because FLOSS contributors work as volunteers and choose what they work on, important projects aren't always the ones to which FLOSS developers devote the most attention. Even when developers want to work on important projects, relative neglect among important projects is often difficult for FLOSS contributors to see. Given all this, what can we do to detect problems in FLOSS infrastructure before major failures occur? Kaylea Champion and I recently published a paper laying out our new method for measuring underproduction at the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) 2021 that we believe provides one important answer to this question.
In the paper, we describe a general approach for detecting underproduced software infrastructure that consists of five steps: (1) identifying a body of digital infrastructure (like a code repository); (2) identifying a measure of quality (like the time it takes to fix bugs); (3) identifying a measure of importance (like install base); (4) specifying a hypothesized relationship linking quality and importance if quality and importance are in perfect alignment; and (5) quantifying deviation from this theoretical baseline to find relative underproduction.

To show how our method works in practice, we applied the technique to an important collection of FLOSS infrastructure: 21,902 packages in the Debian GNU/Linux distribution. Although there are many ways to measure quality, we used a measure of how quickly Debian maintainers have historically dealt with 461,656 bugs that have been filed over the last three decades. To measure importance, we used data from Debian's Popularity Contest opt-in survey. After some statistical machinations that are documented in our paper, the result was an estimate of relative underproduction for the 21,902 packages in Debian we looked at.

One of our key findings is that underproduction is very common in Debian. By our estimates, at least 4,327 packages in Debian are underproduced. As you can see in the list of the most underproduced packages (again, as estimated using just one measure), many of the most at-risk packages are associated with the desktop and windowing environments, where there are many users but also many extremely tricky integration-related bugs. We hope these results are useful to folks at Debian and the Debian QA team. We also hope that the basic method we've laid out is something that others will build off in other contexts and apply to other software repositories.

In addition to the paper itself and the video of the conference presentation on YouTube by Kaylea, we've put a repository with all our code and data in an archival repository at Harvard Dataverse, and we'd love to work with others interested in applying our approach in other software ecosystems. For more details, check out the full paper, which is available as a freely accessible preprint.
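As a toy illustration of the five-step recipe (invented numbers and simple rank percentiles, not the paper's actual statistical model):

# Step 1: infrastructure = a few hypothetical packages with made-up data.
packages = {
    "pkg-a": {"median_fix_days": 30, "installs": 90000},
    "pkg-b": {"median_fix_days": 5,  "installs": 1000},
    "pkg-c": {"median_fix_days": 90, "installs": 150000},
}

def percentile_rank(metric):
    ordered = sorted(packages, key=lambda p: packages[p][metric])
    return {p: i / (len(ordered) - 1.0) for i, p in enumerate(ordered)}

# Steps 2-3: quality (fast bug fixing is better) and importance.
quality = {p: 1 - r for p, r in percentile_rank("median_fix_days").items()}
importance = percentile_rank("installs")

# Steps 4-5: the baseline says quality should track importance, so a
# positive gap (important but low quality) flags candidate underproduction.
for p in packages:
    print(p, importance[p] - quality[p])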
This project was supported by the Ford/Sloan Digital Infrastructure Initiative. Wm Salt Hale of the Community Data Science Collective and Debian Developers Paul Wise and Don Armstrong provided valuable assistance in accessing and interpreting Debian bug data. René Just generously provided insight and feedback on the manuscript.
Paper Citation: Kaylea Champion and Benjamin Mako Hill. 2021. Underproduction: An Approach for Measuring Risk in Open Source Software. In Proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2021). IEEE.
Contact Kaylea Champion (kaylea@uw.edu) with any questions or if you are interested in following up.
Series: Lady Astronaut #2
Publisher: Tor
Copyright: August 2018
ISBN: 0-7653-9893-1
Format: Kindle
Pages: 380
#!/usr/bin/perl
use strict;
use Sys::Syslog;

# St Kilda Harbour RMYS
# http://www.bom.gov.au/products/IDV60901/IDV60901.95864.shtml
my $URL = $ARGV[0];

# Fetch the BOM observations page and skip ahead to the data table.
open(IN, "wget -o /dev/null -O - $URL |") or die "Can't get $URL";
while(<IN>)
{
  if($_ =~ /tr class=.rowleftcolumn/)
  {
    last;
  }
}

# Extract the contents of the table cell with the given header name.
sub get_data
{
  if(not $_[0] =~ /headers=.t1-$_[1]/)
  {
    return undef;
  }
  $_[0] =~ s/^.*headers=.t1-$_[1]..//;
  $_[0] =~ s/<.td.*$//;
  return $_[0];
}

my @datetime;
my $cur_temp = -100;

while(<IN>)
{
  chomp;
  if($_ =~ /^<.tr>$/)
  {
    last;
  }
  my $res;
  if($res = get_data($_, "datetime"))
  {
    @datetime = split(/\//, $res);
  }
  elsif($res = get_data($_, "tmp"))
  {
    $cur_temp = $res;
  }
}
close(IN);
if($#datetime != 1 or $cur_temp == -100)
{
  die "Can't parse BOM data";
}

# Sanity-check that the observation is for today (or late yesterday) and
# no more than an hour old.
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime();
if($mday - $datetime[0] > 1 or ($datetime[0] > $mday and $mday != 1))
{
  die "Date wrong\n";
}

my @timearr = split(/:/, $datetime[1]);
my $mins = $timearr[0] * 60 + $timearr[1];
if($timearr[1] =~ /pm/)
{
  $mins += 720;
}
if($mday != $datetime[0])
{
  $mins += 1440;
}
if($mins + 60 < $hour * 60 + $min)
{
  die "page outdated\n";
}

# Remaining arguments are temperature:percentage pairs; pick the CPU
# percentage for the lowest threshold above the current temperature.
my %temp_hash;
foreach ( @ARGV[1..$#ARGV] )
{
  my @tmparr = split(/:/, $_);
  $temp_hash{$tmparr[0]} = $tmparr[1];
}
my @temp_list = sort(keys(%temp_hash));
my $percent = 0;
my $i;
for($i = $#temp_list; $i >= 0 and $temp_list[$i] > $cur_temp; $i--)
{
  $percent = $temp_hash{$temp_list[$i]};
}

my $prefs = "/etc/boinc-client/global_prefs_override.xml";
open(IN, "<$prefs") or die "Can't read $prefs";
my @prefs_contents;
while(<IN>)
{
  push(@prefs_contents, $_);
}
close(IN);

openlog("boincmgr-cron", "", "daemon");

# If the configured CPU percentage already matches, do nothing.
my @cpus_pct = grep(/max_ncpus_pct/, @prefs_contents);
my $cpus_line = $cpus_pct[0];
$cpus_line =~ s/..max_ncpus_pct.$//;
$cpus_line =~ s/^.*max_ncpus_pct.//;
if($cpus_line == $percent)
{
  syslog("info", "Temp $cur_temp" . "C, already set to $percent");
  exit 0;
}

# Rewrite the prefs file with the new percentage and tell BOINC to re-read it.
open(OUT, ">$prefs.new") or die "Can't write $prefs.new";
for($i = 0; $i <= $#prefs_contents; $i++)
{
  if($prefs_contents[$i] =~ /max_ncpus_pct/)
  {
    print OUT " <max_ncpus_pct>$percent.000000</max_ncpus_pct>\n";
  }
  else
  {
    print OUT $prefs_contents[$i];
  }
}
close(OUT);
rename "$prefs.new", "$prefs" or die "can't rename";
system("boinccmd --read_global_prefs_override");
syslog("info", "Temp $cur_temp" . "C, set percentage to $percent");
cfingerd (#831021), grap (#870573), splint (#924003) & schroot (#902804)
check-pgbackrest, cif2cell, evince, haskell-haskell-gi-base, libiio, jhbuild, smartdns & smartlist.
159 and 160 to Debian:

- strings(1) output by applying the ordering check to all differences across the codebase. [...]
- pgpdump, and check that the associated binary is actually installed before attempting to run it. (#969753)
- guestfs cleanup failure. [...]
- FALLBACK_FILE_EXTENSION_SUFFIX, otherwise we run pgpdump against all files that are recognised by file(1) as data. [...]

2.93.0, 2.94.0, 2.95.0 & 2.96.0 (not counting uploads to the backports repositories), as well as:
- odd-mark-in-description for large numbers such as 300,000. (#969528)
- Vcs-{Browser,Git} location of modules and applications maintained by recently-merged Python module/app teams. (#970743)
- dh(1) sequencer by not looking for the preceding target:. (#970920)
- debian/patches/series if it does not exist. [...]
- $LINTIAN_VERSION assignments in scripts and not just the ones we specify; we had added and removed some during development. [...]
- Vcs-* the vcs-field-not-canonical tag is being emitted for and update its long description to remove misleading messages. (#970201)
- odd-mark-in-description. [...]
- odd-mark-in-description tag. [...]
- debian/changelog manually. [...]
- wrap-and-sort -sa to the debian subdirectory. [...]
- data/README into CONTRIBUTING.md for greater visibility [...] and move CONTRIBUTING.md to use #-style Markdown headers [...]

golang-1.7, inspircd, kleopatra, libdbi-perl, open-build-service, openssl, openssl1.0, python-django, tinymce, yaws & zeromq3, amongst others.
- grunt, to fix an arbitrary code execution vulnerability due to the unsafe loading of YAML documents.
- python-pip, the Python package installer, to fix a directory traversal attack where arbitrary local files (eg. /root/.ssh/authorized_keys) could be overridden.
- libproxy, a library to make applications HTTP proxy-aware.
- gnome-shell, a component of the GNOME desktop: in certain configurations, when logging out of an account, the password box from the login dialog could reappear with the password visible in plaintext.
- ruby-gon, a library to send/convert data to Javascript from Ruby applications, to prevent cross-site scripting (XSS) vulnerabilities.
- (2.2.16-1 & 3.1.1-1) New upstream security releases.
- (1.6.7+dfsg-1) New upstream release.
- (1.6.7+dfsg-2 & 1.6.7+dfsg-3) Run tests in single CPU mode. (#968603)
- (6.0.7-1 & 6.0.8-1) New upstream releases & miscellaneous packaging updates.
- (2.0.0-45) Ensure shunit2 autopkgtest dependency is cross-architecture friendly. (#969604)
- bluez-source: Contains bluez-source-tmp directory. (#970130)
- bookworm: Manual page contains debugging/warning/error information from running binary. (#970277)
- jhbuild: Missing runtime dependency on python3-distutils. (#971418)
- wxwidgets3.0: Links in documentation point to within the original build path, not the installed path. (#970431)
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=mapped-xattr,id=zz,writeout=immediate,fmode=0600,dmode=0700,mount_tag=zz"
VIRTFS="-virtfs local,path=/vmstore/virtfs,security_model=passthrough,id=zz,writeout=immediate,mount_tag=zz"

Above are the 2 configuration snippets I tried on the server side. The first uses mapped XATTRs (which means that all files will have the same UID/GID, and on the host XATTRs will be used for storing the Unix permissions) and the second uses passthrough, which requires KVM to run as root and gives the same permissions on the host as in the VM. The advantages of passthrough are better performance through writing less metadata and having the same permissions in host and VM. The advantages of mapped XATTRs are running KVM/Qemu as non-root and not having a SUID file in the VM imply a SUID file in the host.

Here is the link to Bonnie++ output comparing Ext3 on a KVM block device (stored on a regular file in a BTRFS RAID-1 filesystem on 2 SSDs on the host), a NFS share from the host from the same BTRFS filesystem, and virtfs shares of the same filesystem. The only tests that Ext3 doesn't win are some of the latency tests; latency is based on the worst case, not the average. I expected Ext3 to win most tests, but didn't expect it to lose any latency tests.

Here is a link to Bonnie++ output comparing just NFS and Virtfs. It's obvious that Virtfs compares poorly, giving about half the performance on many tests. Surprisingly, the only tests where Virtfs compared well to NFS were the file creation tests, which I expected Virtfs with mapped XATTRs to do poorly on due to the extra metadata.

Here is a link to Bonnie++ output comparing only Virtfs. The options are mapped XATTRs with default msize, mapped XATTRs with 512k msize (I don't know if this made a difference, the results are within the range of random differences), and passthrough. There's an obvious performance benefit in passthrough for the small file tests due to the lower metadata overhead, but as creating small files isn't a bottleneck on most systems, a 20% to 30% improvement in that area probably doesn't matter much. The result from the random seeks test in passthrough is unusual; I'll have to do more testing on that.

SE Linux

On Virtfs the XATTR used for SE Linux labels is passed through to the host. So every label used in a VM has to be valid on the host and accessible to the context of the KVM/Qemu process. That's not really an option, so you have to use the context mount option. Having the mapped XATTR mode work for SE Linux labels is a necessary feature.

Conclusion

The msize mount option in the VM doesn't appear to do anything, and it doesn't appear in /proc/mounts; I don't know if it's even supported in the kernel I'm using. The passthrough and mapped XATTR modes give near enough performance that there doesn't seem to be a benefit of one over the other. NFS gives significant performance benefits over Virtfs while also using less CPU time in the VM. It has the issue of files named .nfs* hanging around if the VM crashes while programs were using deleted files. It's also more well known: ask for help with an NFS problem and you are more likely to get advice than when asking for help with a virtfs problem. Virtfs might be a better option for accessing databases than NFS due to its internal operation probably being a better map to Unix filesystem semantics, but running database servers on the host is probably a better choice anyway. Virtfs generally doesn't seem to be worth using.
I had hoped for performance that was better than NFS but the only benefit I seemed to get was avoiding the .nfs* file issue. The best options for storage for a KVM/Qemu VM seem to be Ext3 for files that are only used on one VM and for which the size won t change suddenly or unexpectedly (particularly the root filesystem) and NFS for everything else.
Publisher: Silvertail
Copyright: 2014-2016
Printing: 2016
ISBN: 1-909269-42-5
Format: Kindle
Pages: 308
Every damaging resignation letter, every cornered truth attack, every out-of-control speech by a former friend, is more than just an inconvenience, to be countered with positive spin and internal memos: it's an open challenge to the official version of the story, the perfectly controlled brand. They are breaks in an otherwise perfectly planned, smoothly executed narrative of the powerful. Holes in the program code. A rare, irresistible chance to hack into history's shiny, unstoppable operation.

The Last Goodbye: A History of the World in Resignation Letters is not, in truth, a history of the world. It is, first and foremost, a taxonomy, because there are types of resignation letters. The opening chapter, the truth bomb, is the type that one would expect upon discovering that someone wrote a book on the topic (that wasn't advice on how to write a good one). But there are other types, less heavy on the fireworks but just as fascinating. The unquotable expert construction. The knife in the back. The incoherent scream of rage. But also the surprisingly gentle and graceful conclusion.
It is the question that the letters themselves try in vain to answer, over and over again, even as they explain, analyse, protest and bear witness to a million other details. The question is: Why? All the forces in the universe stack up against unburdening ourselves in a resignation letter. Professionally, it can be suicide. In practical terms, it is often self-defeating. Self-help books coach against unleashing its force; colleagues and confidantes urge caution, self-restraint. And yet we do it, and damn the consequences. We have no choice but to speak in sorrow, love, grief, cold anger, thirst for revenge, wounded pride, the pain of injustice, loyalty, pangs of regret, throes of vengeful madness, deluded righteousness, panic, black distress, isolation, ecstasies of martyrdom, and a million other shades of human extremity; we need to say our piece even as we leave the stage.

The risk of the enthusiast's book is that the lack of structural grounding can leave the conclusions unsupported. A fair critique of this book is that it contains a lot of amateur sociology. Potter has a journalist's eye for motive and narrative, but some of his conclusions may not be warranted. But he compensates for the lack of rigor with, well, enthusiasm. Potter is fascinated by resignation letters and the insight they offer, and that fascination is irresistibly contagious. It's probably obvious that the chapters on truth bombs, fuck yous, and knives in the back have visceral appeal. The resignation letter as a force of truth-telling, as the revenge of a disregarded peon, as a crack in the alliance between the powerful that may let some truth shine through, is what got me to buy this book. And Potter doesn't disappoint; he quotes from both famous and nearly unknown examples, dissects the writing and the intent, and gives us a ringside seat to a few effective acts of revenge.

That's not the best part of this book, though. The chapter that I will remember the longest is Potter's dissection of the constructed resignation letter. The carefully drafted public relations statement, the bland formality, the attempt to make a news story disappear. The conversation-ender. It's a truism that any area of human endeavor involves more expertise than those who have only observed it from the outside will realize, but I had never thought to apply that principle to the ghost-written resignation letter. The careful non-apology, the declaration that one has "become a distraction," the tell-tale phrasing of "spending more time with my family": it is offensive in its bland dishonesty. But Potter shows that the blandness is expertly constructed to destroy quotability. Those statements are nearly impossible to remember or report on because they have been built that way: nouns carefully separated from verbs, all force dissipated by circuities and modifiers, and subtle grammatical errors introduced to discourage written news from including direct quotes. Potter's journalism background shines here because he can describe the effect on news reporting. He walks the reader through the construction to reveal that the writing is not incompetent but instead is skillfully bad in a way that causes one's attention to skitter off of it. The letter vanishes into its own vagueness. The goal is to smother the story in such mediocrity that it becomes forgettable. And it works.

I've written several resignation letters of my own. Somewhat unusually, I've even sent several of them, although (as Potter says is typical) fewer than I've written.
I've even written (and sent) the sort of resignation letter that every career advisor will say never to send. Potter's discussion of the motives and thought process behind those letters rang true for me. It's a raw and very human moment, one is never in as much control of it as one wishes, the cracks and emotions break through in the writing, and often those letters are trying to do far too many things at the same time. But it's also a moment in which one can say something important and have others listen, which can be weirdly challenging to do in the normal course of a job.

Potter ends this book beautifully by looking at resignation letters that break or transcend the mold of most of the others he's examined: letters where the author seems to have found some peace and internal understanding and expresses that simply and straightforwardly. I found this surprisingly touching after the emotional roller-coaster of the rest of the book, and a lovely note on which to end.

This is a great book. Potter has a good eye for language and the emotion encoded in it, a bracing preference for the common worker or employee over the manager or politician, and the skill to produce some memorable turns of phrase. Most importantly, he has the enthusiast's love of the topic. Even if you don't care about resignation letters going in, it will be hard to avoid some fascination with them by the end of this book. Recommended.

This book was originally published as F*ck You and Goodbye in the UK.

Rating: 8 out of 10
Silicon Valley's elite are hatching plans to escape disaster, and when it comes, they'll leave the rest of us behind
Heteronomy refers to action that is influenced by a force outside the individual; in other words, the state or condition of being ruled, governed, or under the sway of another, as in a military occupation.
Early Warning Signs of Fascism: Laurence W. Britt wrote about the common signs of fascism in April 2003, after researching seven fascist regimes: Hitler's Nazi Germany, Mussolini's Italy, Franco's Spain, Salazar's Portugal, Papadopoulos' Greece, Pinochet's Chile, and Suharto's Indonesia. The poster's text reads: Early Warning Signs of Fascism: Powerful and Continuing Nationalism; Disdain for Human Rights; Identification of Enemies as a Unifying Cause; Supremacy of the Military; Rampant Sexism; Controlled Mass Media; Obsession with National Security
Political and social scientist Stefania Milan writes about social movements, mobilization, and organized collective action. On the one hand, interactions and networks achieve more visibility and become a proxy for a collective "we". On the other hand: law enforcement can exercise preemptive monitoring.
How new technologies and techniques pioneered by dictators will shape the 2020 election
A regional election offers lessons on combatting the rise of the far right, both across the Continent and in the United States.
The Italian diaspora is the large-scale emigration of Italians from Italy. There are two major Italian diasporas in Italian history. The first diaspora began more or less around 1880, a decade or so after the Unification of Italy, and ended in the 1920s to early 1940s with the rise of Fascism in Italy. The second diaspora started after the end of World War II and roughly concluded in the 1970s. These together constituted the largest voluntary emigration period in documented history. Between 1880 and 1980, about 15,000,000 Italians left the country permanently. By 1980, it was estimated that about 25,000,000 Italians were residing outside Italy. A third wave is being reported in present times, due to the socio-economic problems caused by the financial crisis of the early twenty-first century, especially amongst the youth. According to the Public Register of Italian Residents Abroad (AIRE), the number of Italians abroad rose from 3,106,251 in 2006 to 4,636,647 in 2015, growing by 49.3% in just ten years.
# Select the most recent bullseye image for arm64 instance types:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='arm64'] [?starts_with(Name, 'debian-11-')] max_by([], &Name)"
"Architecture": "arm64",
"CreationDate": "2020-03-04T05:31:12.000Z",
"ImageId": "ami-056a2fe946ef98607",
"ImageLocation": "903794441882/debian-11-arm64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
"DeviceName": "/dev/xvda",
"Ebs":
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-0d7a569b159964d87",
"VolumeSize": 8,
"VolumeType": "gp2"
],
"Description": "Debian 11 (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-11-arm64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
# Similarly, select the most recent sid amd64 AMI:
$ aws ec2 describe-images --owner 903794441882 \
--region us-east-1 --output json \
--query "Images[?Architecture=='x86_64'] [?starts_with(Name, 'debian-sid-')] max_by([], &Name)"
"Architecture": "x86_64",
"CreationDate": "2020-03-04T05:13:58.000Z",
"ImageId": "ami-00ec9272298ca9059",
"ImageLocation": "903794441882/debian-sid-amd64-daily-20200304-189",
"ImageType": "machine",
"Public": true,
"OwnerId": "903794441882",
"State": "available",
"BlockDeviceMappings": [
"DeviceName": "/dev/xvda",
"Ebs":
"Encrypted": false,
"DeleteOnTermination": true,
"SnapshotId": "snap-07c3fad3ff835248a",
"VolumeSize": 8,
"VolumeType": "gp2"
],
"Description": "Debian sid (daily build 20200304-189)",
"EnaSupport": true,
"Hypervisor": "xen",
"Name": "debian-sid-amd64-daily-20200304-189",
"RootDeviceName": "/dev/xvda",
"RootDeviceType": "ebs",
"SriovNetSupport": "simple",
"VirtualizationType": "hvm"
If you're using Microsoft Azure images, you can inspect the images with az vm image list and az vm image show, as follows:
$ az vm image list -o table --publisher debian --offer debian-sid-daily --location westeurope --all | sort -k 5 | tail
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200228.184 0.20200228.184
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200229.185 0.20200229.185
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200301.186 0.20200301.186
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200302.187 0.20200302.187
debian-sid-daily Debian sid Debian:debian-sid-daily:sid:0.20200303.188 0.20200303.188
debian-sid-daily Debian sid-gen2 Debian:debian-sid-daily:sid-gen2:0.20200303.188 0.20200303.188
Offer Publisher Sku Urn Version
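# The sort/tail pipeline can also be replaced by the CLI's built-in JMESPath
# support. A sketch, under the assumption that a lexicographic max is adequate
# for these fixed-width version strings:
$ az vm image list -o tsv --all --location westeurope \
    --publisher debian --offer debian-sid-daily --sku sid \
    --query "max_by([], &version).urn"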
$ az vm image show --location westeurope --urn debian:debian-sid-daily:sid:latest
"automaticOsUpgradeProperties":
"automaticOsUpgradeSupported": false
,
"dataDiskImages": [],
"hyperVgeneration": "V1",
"id": "/Subscriptions/428325bd-cc87-41f1-b0d8-8caf8bb80b6b/Providers/Microsoft.Compute/Locations/westeurope/Publishers/debian/ArtifactTypes/VMImage/Offers/debian-sid-daily/Skus/sid/Versions/0.20200303.188",
"location": "westeurope",
"name": "0.20200303.188",
"osDiskImage":
"operatingSystem": "Linux",
"sizeInBytes": 32212255232,
"sizeInGb": 30
,
"plan": null,
"tags": null
More information about cloud computing with Debian is available on the wiki.